74 research outputs found

    MeMaHand: Exploiting Mesh-Mano Interaction for Single Image Two-Hand Reconstruction

    Existing methods for hand reconstruction usually either parameterize a generic 3D hand model or predict hand mesh positions directly. Parametric representations, consisting of hand shapes and rotational poses, are more stable, while non-parametric methods can predict more accurate mesh positions. In this paper, we propose to reconstruct the meshes and estimate the MANO parameters of two hands from a single RGB image simultaneously, to exploit the merits of both kinds of hand representation. To this end, we propose novel Mesh-Mano interaction blocks (MMIBs), which take mesh vertex positions and MANO parameters as two kinds of query tokens. Each MMIB consists of one graph residual block to aggregate local information and two transformer encoders to model long-range dependencies. The transformer encoders are equipped with different asymmetric attention masks to model intra-hand and inter-hand attention, respectively. Moreover, we introduce a mesh alignment refinement module to further enhance mesh-image alignment. Extensive experiments on the InterHand2.6M benchmark demonstrate promising results compared with state-of-the-art hand reconstruction methods.
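
    As a rough illustration of the block structure described above, here is a minimal PyTorch-style sketch of an MMIB-like layer. The token layout, dimensions, adjacency handling, and mask construction are assumptions for illustration, not the authors' implementation.

        # Hedged sketch of a Mesh-Mano interaction block (MMIB)-style layer.
        # Token ordering (vertex tokens first), dimensions, and masks are
        # illustrative assumptions, not the paper's implementation.
        import torch
        import torch.nn as nn

        class GraphResidualBlock(nn.Module):
            """Aggregate local information over a fixed, normalized mesh adjacency."""
            def __init__(self, dim, adj):                  # adj: (V, V) tensor
                super().__init__()
                self.register_buffer("adj", adj)
                self.fc = nn.Linear(dim, dim)
            def forward(self, x):                          # x: (B, V, dim) vertex tokens
                return x + torch.relu(self.adj @ self.fc(x))

        class MMIB(nn.Module):
            def __init__(self, dim, adj, n_heads=4):
                super().__init__()
                self.graph = GraphResidualBlock(dim, adj)
                make_enc = lambda: nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
                self.intra_enc, self.inter_enc = make_enc(), make_enc()
            def forward(self, tokens, intra_mask, inter_mask):
                # tokens: (B, T, dim) = mesh-vertex tokens and MANO-parameter
                # tokens of both hands, concatenated along the sequence axis.
                n_vertices = self.graph.adj.shape[0]
                vert, rest = tokens[:, :n_vertices], tokens[:, n_vertices:]
                tokens = torch.cat([self.graph(vert), rest], dim=1)
                # Asymmetric boolean masks (True = attention blocked) restrict the
                # first encoder to intra-hand pairs and the second to inter-hand pairs.
                tokens = self.intra_enc(tokens, src_mask=intra_mask)
                tokens = self.inter_enc(tokens, src_mask=inter_mask)
                return tokens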

    Dopamine Surface Modification of Trititanate Nanotubes: Proposed In‐Situ Structure Models

    Two models for self-assembled dopamine on the surface of trititanate nanotubes are proposed: individual monomer units linked by π–π stacking of the aromatic regions, and mono-attached units interacting through hydrogen bonds. These models were investigated with solid-state NMR spectroscopy and powder X-ray diffraction.

    Batch-based Model Registration for Fast 3D Sherd Reconstruction

    3D reconstruction techniques have been widely used for the digital documentation of archaeological fragments. However, efficient digital capture of fragments remains a challenge. In this work, we aim to develop a portable, high-throughput, and accurate reconstruction system for efficient digitization of fragments excavated at archaeological sites. To realize high-throughput digitization of large numbers of objects, an effective strategy is to perform scanning and reconstruction in batches. However, effective batch-based scanning and reconstruction face two key challenges: 1) how to correlate partial scans of the same object across multiple batch scans, and 2) how to register and reconstruct complete models from partial scans that exhibit only small overlaps. To tackle these two challenges, we develop a new batch-based matching algorithm that pairs the front and back sides of the fragments, and a new Bilateral Boundary ICP algorithm that can register partial scans sharing very narrow overlapping regions. Extensive validation in the lab and testing at excavation sites demonstrate that these designs enable efficient batch-based scanning of fragments. We show that such a batch-based scanning and reconstruction pipeline has immediate applications in digitizing sherds in archaeological excavations. Project page: https://jiepengwang.github.io/FIRES/.
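
    The paper's Bilateral Boundary ICP is not detailed in this abstract; as a rough stand-in, the sketch below shows a plain point-to-point ICP run only on pre-extracted boundary points, which is the general kind of registration involved when overlaps are narrow. The boundary extraction step, the rejection threshold, and the function name are assumptions.

        # Point-to-point ICP over boundary points only (illustrative stand-in,
        # not the paper's Bilateral Boundary ICP).
        import numpy as np
        from scipy.spatial import cKDTree

        def icp_boundary(src, dst, n_iters=50, max_dist=2.0):
            """Rigidly align src boundary points (N, 3) to dst boundary points (M, 3)."""
            R, t = np.eye(3), np.zeros(3)
            tree = cKDTree(dst)
            for _ in range(n_iters):
                cur = src @ R.T + t
                dists, idx = tree.query(cur)
                keep = dists < max_dist                 # reject distant correspondences
                if keep.sum() < 3:
                    break
                p, q = cur[keep], dst[idx[keep]]
                # Kabsch step: best rotation/translation between matched point sets.
                pc, qc = p - p.mean(0), q - q.mean(0)
                U, _, Vt = np.linalg.svd(pc.T @ qc)
                D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
                R_step = Vt.T @ D @ U.T
                t_step = q.mean(0) - p.mean(0) @ R_step.T
                R, t = R_step @ R, R_step @ t + t_step  # compose with current estimate
            return R, t                                 # maps src into dst's frame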

    An Implicit Parametric Morphable Dental Model

    3D morphable models of the human body capture variations among subjects and are useful in reconstruction and editing applications. Current dental models use an explicit mesh scene representation and model only the teeth, ignoring the gum. In this work, we present the first parametric 3D morphable dental model for both teeth and gum. Our model uses an implicit scene representation and is learned from rigidly aligned scans. It is based on a component-wise representation for each tooth and the gum, together with a learnable latent code for each such component. It also learns a template shape, thus enabling several applications such as segmentation, interpolation, and tooth replacement. Our reconstruction quality is on par with the most advanced global implicit representations while enabling novel applications. Project page: https://vcai.mpi-inf.mpg.de/projects/DMM
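
    A minimal sketch of a component-wise implicit model in this spirit: one learnable latent code per tooth/gum component, a shared conditioned MLP, and the scene composed as the minimum over per-component distances. Layer sizes, the shared network, and the distance convention are illustrative assumptions rather than the paper's architecture.

        # Component-wise implicit shape model sketch (assumptions noted above).
        import torch
        import torch.nn as nn

        class ComponentImplicitModel(nn.Module):
            def __init__(self, n_components, latent_dim=64, hidden=128):
                super().__init__()
                self.codes = nn.Embedding(n_components, latent_dim)   # learnable latents
                self.mlp = nn.Sequential(
                    nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, 1),
                )
            def forward(self, xyz):                       # xyz: (B, 3) query points
                n = self.codes.num_embeddings
                pts = xyz[:, None, :].expand(-1, n, -1)                        # (B, n, 3)
                codes = self.codes.weight[None].expand(xyz.shape[0], -1, -1)   # (B, n, latent)
                d = self.mlp(torch.cat([pts, codes], dim=-1)).squeeze(-1)      # (B, n) per-component distances
                return d.min(dim=-1).values               # distance to the composed scene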

    Low-Power Redundant-Transition-Free TSPC Dual-Edge-Triggering Flip-Flop Using Single-Transistor-Clocked Buffer

    In the modern graphics processing unit (GPU)/artificial intelligence (AI) era, the flip-flop (FF) has become one of the most power-hungry blocks in processors. To address this issue, a novel single-phase-clock dual-edge-triggering (DET) FF using a single-transistor-clocked (STC) buffer (STCB) is proposed. The STCB uses a single clocked transistor in the data sampling path, which completely removes the clock redundant transitions (RTs) and internal RTs that exist in other DET designs. Verified by post-layout simulations in 22 nm fully depleted silicon-on-insulator (FD-SOI) CMOS, when operating at 10% switching activity, the proposed STC-DET outperforms prior state-of-the-art low-power DET designs in power consumption by 14% and 9.5% at 0.4 V and 0.8 V, respectively. It also achieves the lowest power-delay product (PDP) among the compared DET designs.
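
    The result above concerns circuit-level design; as a purely behavioral illustration of dual-edge triggering (not of the proposed STC buffer), the sketch below captures data on both clock edges, which is what lets a DET FF keep the same data throughput at half the clock frequency. The names and toy traces are hypothetical.

        # Behavioral model of a dual-edge-triggered flip-flop (illustration only).
        class DualEdgeFF:
            def __init__(self):
                self.q = 0            # stored output
                self._clk = 0         # last observed clock level

            def tick(self, clk, d):
                """Sample d whenever the clock level changes (rising or falling edge)."""
                if clk != self._clk:  # either edge triggers a capture
                    self.q = d
                    self._clk = clk
                return self.q

        # A single-edge FF would sample only on 0 -> 1 transitions; the DET version
        # samples twice per clock period.
        clk = [0, 1, 1, 0, 0, 1]
        data = [1, 1, 0, 0, 1, 1]
        ff = DualEdgeFF()
        q_trace = [ff.tick(c, d) for c, d in zip(clk, data)]   # -> [0, 1, 1, 0, 0, 1]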

    Giant Enhancement of Magnonic Frequency Combs by Exceptional Points

    With their unrivaled time-frequency accuracy, frequency combs have significantly advanced precision spectroscopy, ultra-sensitive detection, and atomic clocks. Traditional methods to create photonic, phononic, and magnonic frequency combs hinge on material nonlinearities that are often weak, necessitating high power densities to surpass their initiation thresholds, which in turn limits their applications. Here, we introduce a novel nonlinear process to efficiently generate magnonic frequency combs (MFCs) by exploiting exceptional points (EPs) in a coupled system comprising a pump-induced magnon mode and a Kittel mode. Even without any cavity, our method greatly improves the efficiency of nonlinear frequency conversion and achieves optimal MFCs at low pump power. Additionally, this nonlinear process enables excellent tunability of the EPs via the polarization and power of the pump, simplifying MFC generation and manipulation. Our work establishes a synergistic relationship between non-Hermitian physics and MFCs, which is advantageous for coherent/quantum information processing and ultra-sensitive detection.
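
    For context, a generic two-mode non-Hermitian coupled-mode model is the usual setting in which such exceptional points arise (here a pump-induced mode a coupled to the Kittel mode m); the symbols below are illustrative, not the paper's specific Hamiltonian.

        % Illustrative non-Hermitian coupled-mode model and its EP condition.
        H_{\mathrm{eff}} =
        \begin{pmatrix}
          \omega_a - i\gamma_a & g \\
          g & \omega_m - i\gamma_m
        \end{pmatrix},
        \qquad
        \omega_\pm = \frac{\omega_a + \omega_m}{2} - i\,\frac{\gamma_a + \gamma_m}{2}
          \pm \sqrt{g^2 + \Big(\frac{\omega_a - \omega_m}{2} - i\,\frac{\gamma_a - \gamma_m}{2}\Big)^{2}}.
        % At resonance (\omega_a = \omega_m), the square root vanishes when
        % g = |\gamma_a - \gamma_m| / 2: the exceptional point, where the two
        % eigenmodes coalesce.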

    HandPainter – 3D sketching in VR with hand-based physical proxy

    3D sketching in virtual reality (VR) enables users to create 3D virtual objects intuitively and immersively. However, previous studies showed that mid-air drawing may lead to inaccurate sketches. To address this issue, we propose to use one hand as a canvas proxy and the index finger of the other hand as a 3D pen. To this end, we first perform a formative study comparing two-handed interaction with tablet-pen interaction for VR sketching. Based on the findings of this study, we design HandPainter, a VR sketching system focused on the direct use of two hands for 3D sketching without requiring any tablet, pen, or VR controller. Our implementation is based on a pair of VR gloves, which provide hand tracking and gesture capture. We devise a set of intuitive gestures to control the various functionalities required during 3D sketching, such as canvas panning and drawing positioning. We show the effectiveness of HandPainter by presenting a number of sketching results and discussing the outcomes of a user-study-based comparison with mid-air drawing and tablet-based sketching tools.
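
    As an illustration of gesture-driven control only (the actual gesture set and glove API are not given in this abstract, and the names below are hypothetical), a dispatch of recognized gestures to sketching actions might look like this.

        # Hypothetical gesture-to-action dispatch for two-handed VR sketching.
        from dataclasses import dataclass, field

        @dataclass
        class SketchState:
            drawing: bool = False
            canvas_offset: list = field(default_factory=lambda: [0.0, 0.0, 0.0])

        def handle_gesture(state, gesture, hand_delta=(0.0, 0.0, 0.0)):
            if gesture == "pinch":        # pen finger touches the canvas hand: draw
                state.drawing = True
            elif gesture == "open_palm":  # lift the pen: end the current stroke
                state.drawing = False
            elif gesture == "grab":       # move the proxy hand: pan the canvas
                state.canvas_offset = [o + d for o, d in zip(state.canvas_offset, hand_delta)]
            return state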

    Surface Extraction from Neural Unsigned Distance Fields

    We propose a method, named DualMesh-UDF, to extract a surface from unsigned distance functions (UDFs) encoded by neural networks, or neural UDFs. Neural UDFs are becoming increasingly popular for surface representation because of their versatility in representing surfaces with arbitrary topologies, as opposed to signed distance functions, which are limited to representing closed surfaces. However, the applications of neural UDFs are hindered by the notorious difficulty of extracting the target surfaces they represent. Recent methods for surface extraction from a neural UDF suffer from significant geometric errors or topological artifacts due to two main difficulties: (1) a UDF does not exhibit sign changes; and (2) a neural UDF typically has substantial approximation errors. DualMesh-UDF addresses these two difficulties. Specifically, given a neural UDF encoding a target surface $\bar{S}$ to be recovered, we first estimate the tangent planes of $\bar{S}$ at a set of sample points close to $\bar{S}$. Next, we organize these sample points into local clusters and, for each local cluster, solve a linear least-squares problem to determine a final surface point. These surface points are then connected to create the output mesh surface, which approximates the target surface. The robust estimation of the tangent planes of the target surface and the subsequent minimization problem constitute our core strategy, which contributes to the favorable performance of DualMesh-UDF over competing methods. To efficiently implement this strategy, we employ an adaptive octree. Within this framework, we estimate the location of a surface point in each octree cell identified as containing part of the target surface. Extensive experiments show that our method outperforms existing methods in terms of surface reconstruction quality while maintaining comparable computational efficiency.
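
    A small sketch of the per-cluster least-squares step described above, assuming the sample points and their estimated unit tangent-plane normals for one octree cell are already available; the regularization toward the centroid is an added assumption to keep the solution inside the cell, not necessarily the paper's exact formulation.

        # Per-cell surface point from tangent-plane constraints (illustrative).
        import numpy as np

        def cell_surface_point(points, normals, reg=1e-3):
            """points, normals: (K, 3) samples and unit normals defining planes
            n_i . x = n_i . p_i; returns x minimizing the summed plane residuals."""
            A = normals                                   # each row: n_i^T
            b = np.einsum("ij,ij->i", normals, points)    # each entry: n_i . p_i
            centroid = points.mean(axis=0)
            # Regularize toward the centroid so nearly coplanar constraints do
            # not push the solution far outside the cell.
            A_reg = np.vstack([A, np.sqrt(reg) * np.eye(3)])
            b_reg = np.concatenate([b, np.sqrt(reg) * centroid])
            x, *_ = np.linalg.lstsq(A_reg, b_reg, rcond=None)
            return x                                      # dual vertex for this cell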